Results 1 - 2 of 2
1.
researchsquare; 2020.
Preprint in English | PREPRINT-RESEARCHSQUARE | ID: ppzbmed-10.21203.rs.3.rs-56078.v1

ABSTRACT

Background: We introduce a novel speech processing framework, the MIT CBMM Open Voice Brain Model (OVBM), combining implementations of the four modules of intelligence: the brain OS chunks and overlaps audio samples and transfers CNN features from the sensory stream and cognitive core, creating a multi-modal graph neural network of symbolic compositional models for the target task.

Methods: Our approach consists of pre-training models to extract acoustic features from selected biomarkers, then leveraging transfer learning to combine the biomarker feature extractors into a graph neural network that provides an explainable diagnostic of Alzheimer's Dementia (AD) from speech recordings.

Results: We apply OVBM to the automated diagnosis of Alzheimer's Dementia patients, achieving above-state-of-the-art accuracy of 93.8% using only raw audio, while extracting a personalized subject saliency map that tracks relative disease progression across 16 explainable biomarkers.

Conclusion: By using independent biomarker models, OVBM lets health experts explore biomarker features and whether AD shares biomarker features with other diseases such as COVID-19. We present a novel lungs and respiratory tract biomarker created using 200,000+ cough samples to pre-train a model discriminating English from Catalan coughs. Transfer learning is subsequently used to transfer features from this model into various other OVBM biomarker models. This strategy yielded consistent improvements in AD detection, no matter the combination used. This cough dataset sets a new benchmark as the largest audio health dataset, with 30,000+ subjects participating as of April 2020, demonstrating cough cultural bias for the first time.
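The Methods step above (freeze several pre-trained biomarker feature extractors, then train only a downstream model on their combined features) can be sketched as follows. This is a minimal illustration, not the OVBM implementation: the frozen extractors are stand-in random projections rather than the paper's CNNs, the combiner is a simple logistic-regression head rather than the paper's graph neural network, and all data, dimensions, and function names (`extract_features`, `train_head`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen, pre-trained biomarker feature extractors.
# In OVBM each would be a CNN trained on a separate biomarker task;
# here each is a fixed random projection followed by a nonlinearity.
N_BIOMARKERS, AUDIO_DIM, FEAT_DIM = 4, 256, 16
extractors = [
    rng.normal(size=(AUDIO_DIM, FEAT_DIM)) / np.sqrt(AUDIO_DIM)
    for _ in range(N_BIOMARKERS)
]

def extract_features(audio: np.ndarray) -> np.ndarray:
    """Run every frozen extractor on one clip and concatenate the features."""
    feats = [np.tanh(audio @ W) for W in extractors]  # weights stay frozen
    return np.concatenate(feats)

def train_head(X: np.ndarray, y: np.ndarray, epochs: int = 200, lr: float = 0.1):
    """Train only a logistic-regression head on the combined features
    (the paper instead combines them in a graph neural network)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad = p - y                            # gradient of cross-entropy
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy data: synthetic "audio" clips with a toy binary label.
X_audio = rng.normal(size=(64, AUDIO_DIM))
y = (X_audio[:, 0] > 0).astype(float)
X = np.stack([extract_features(a) for a in X_audio])  # (64, 4 * 16)

w, b = train_head(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Because the extractors are never updated, each biomarker model keeps its own interpretable feature space, which is what allows the per-biomarker saliency maps described in the Results.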


Subject(s)
Alzheimer Disease , Voice Disorders , COVID-19 , Cough
2.
arxiv; 2020.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2004.06510v1

ABSTRACT

Just like your phone can detect what song is playing in crowded spaces, we show that Artificial Intelligence transfer learning algorithms trained on cough phone recordings result in diagnostic tests for COVID-19. To gain adoption by the health care community, we plan to validate our results in a clinical trial and three other venues in Mexico, Spain and the USA. However, with data from other ongoing clinical trials and volunteers, we could do much more. For example, for confirmed stay-at-home COVID-19 patients, a longitudinal audio test could be developed to determine contact-with-hospital recommendations, and, for the most critical COVID-19 patients, a success-ratio forecast test incorporating patient clinical data could prioritize ICU allocation. As a challenge to the engineering community, and in the context of our clinical trial, the authors suggest distributing cough recordings daily, hoping other trials and crowdsourcing users will contribute more data. Previous approaches to complex AI tasks have either used a static dataset or were private efforts led by large corporations. All published COVID-19 trials also follow this paradigm. Instead, we suggest a novel open collective approach to large-scale real-time health care AI. We will be posting updates at https://opensigma.mit.edu. Our personal view is that our approach is the right one for large-scale pandemics, and therefore is here to stay. Will you join?


Subject(s)
COVID-19